23 research outputs found

    Exogenous Rewards for Promoting Cooperation in Scale-Free Networks

    The design of mechanisms that encourage pro-social behaviours in populations of self-regarding agents is recognised as a major theoretical challenge within several areas of social, life and engineering sciences. When interference from external parties is considered, several heuristics have been identified as capable of engineering a desired collective behaviour at a minimal cost. However, these studies neglect the diverse nature of contexts and social structures that characterise real-world populations. Here we analyse the impact of diversity by means of scale-free interaction networks with high and low levels of clustering, and test various interference mechanisms using simulations of agents facing a cooperative dilemma. Our results show that interference on scale-free networks is not trivial and that distinct levels of clustering react differently to each interference mechanism. As such, we argue that no tailored response fits all scale-free networks and present which mechanisms are more efficient at fostering cooperation in both types of networks. Finally, we discuss the pitfalls of considering reckless interference mechanisms.
    Comment: 8 pages, 5 figures, to appear in the Proceedings of the Artificial Life Conference 2019, 29 July - 2 August 2019, Newcastle, England.
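    The abstract does not reproduce the model, but the class of experiment it describes can be sketched in a few lines of Python. The sketch below assumes a donation-game payoff, preferential-attachment network growth, Fermi imitation, and a flat "reward every cooperator" interference heuristic; all function names, parameters and values are hypothetical illustrations, not the paper's actual setup.

```python
import math
import random

random.seed(0)

def preferential_attachment(n, m=2):
    """Grow a scale-free-like graph: each new node links to m existing
    nodes picked proportionally to degree (Barabasi-Albert style)."""
    adj = {0: {1}, 1: {0}}
    stubs = [0, 1]                       # each node appears once per edge end
    for new in range(2, n):
        adj[new] = set()
        while len(adj[new]) < min(m, new):
            adj[new].add(random.choice(stubs))
        for t in adj[new]:
            adj[t].add(new)
            stubs += [new, t]
    return adj

def run(n=60, rounds=150, b=2.0, c=1.0, reward=0.5, K=0.1):
    """Donation game on the network, with an external party paying a flat
    bonus to every cooperator (the interference mechanism)."""
    adj = preferential_attachment(n)
    coop = {i: random.random() < 0.5 for i in adj}

    def payoff(i):
        gain = sum(b for j in adj[i] if coop[j])    # benefit from cooperating neighbours
        cost = c * len(adj[i]) if coop[i] else 0.0  # cooperators pay a cost per neighbour
        bonus = reward if coop[i] else 0.0          # exogenous reward
        return gain - cost + bonus

    for _ in range(rounds):
        i = random.choice(list(adj))
        j = random.choice(list(adj[i]))
        # Fermi rule: imitate neighbour j with probability increasing in the payoff gap
        x = max(-50.0, min(50.0, (payoff(j) - payoff(i)) / K))
        if random.random() < 1.0 / (1.0 + math.exp(-x)):
            coop[i] = coop[j]
    return sum(coop.values()) / n

frac = run()  # final fraction of cooperators
```

    Varying `reward` against the interference budget, and comparing network generators with high versus low clustering, would be the natural way to reproduce the comparison the abstract reports.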

    Artificial intelligence development races in heterogeneous settings

    Regulation of advanced technologies such as Artificial Intelligence (AI) has become increasingly important, given the associated risks and apparent ethical issues. With the great benefits promised to whoever first supplies such technologies, safety precautions and societal consequences might be ignored or shortchanged in exchange for speeding up the development, therefore engendering a racing narrative among the developers. Starting from a game-theoretical model describing an idealised technology race in a fully connected world of players, here we investigate how different interaction structures among race participants can alter collective choices and requirements for regulatory actions. Our findings indicate that, when participants portray a strong diversity in terms of connections and peer-influence (e.g., when scale-free networks shape interactions among parties), the conflicts that exist in homogeneous settings are significantly reduced, thereby lessening the need for regulatory actions. Furthermore, our results suggest that technology governance and regulation may profit from the world's patent heterogeneity and inequality among firms and nations, so as to enable the design and implementation of meticulous interventions on a minority of participants, which is capable of influencing an entire population towards an ethical and sustainable use of advanced technologies.
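    The well-mixed baseline the abstract starts from can be illustrated with a toy two-strategy race game, SAFE developers who follow precautions versus UNSAFE ones who skip them, iterated with discrete replicator dynamics. The payoff values below are invented for illustration; the paper's actual race model is not reproduced in the abstract.

```python
# Hypothetical payoffs: s = speed gain from skipping precautions,
# r = probability an UNSAFE development triggers a disaster, B = race prize.
s, r, B = 1.5, 0.4, 4.0

payoff = {
    ("SAFE", "SAFE"): B / 2,                  # split the prize, no risk
    ("SAFE", "UNSAFE"): 0.0,                  # outpaced by the faster rival
    ("UNSAFE", "SAFE"): (1 - r) * s * B,      # win faster, but risk disaster
    ("UNSAFE", "UNSAFE"): (1 - r) * s * B / 2 # split, still risky
}

def replicator_step(x, dt=0.1):
    """x: fraction of SAFE players in a well-mixed population.
    Discrete replicator update: strategies above the mean payoff grow."""
    f_safe = x * payoff[("SAFE", "SAFE")] + (1 - x) * payoff[("SAFE", "UNSAFE")]
    f_unsafe = x * payoff[("UNSAFE", "SAFE")] + (1 - x) * payoff[("UNSAFE", "UNSAFE")]
    f_bar = x * f_safe + (1 - x) * f_unsafe
    return x + dt * x * (f_safe - f_bar)

x = 0.5
for _ in range(200):
    x = replicator_step(x)
# With these invented payoffs UNSAFE always earns more, so x drifts toward 0:
# the unregulated well-mixed race converges on skipping precautions.
```

    Replacing the well-mixed update with imitation on a scale-free network is the structural change whose effect the paper studies.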

    Fairness and deception in human interactions with artificial agents

    Online information ecosystems are now central to our everyday social interactions. Of the many opportunities and challenges this presents, the capacity for artificial agents to shape individual and collective human decision-making in such environments is of particular importance. In order to assess and manage the impact of artificial agents on human well-being, we must consider not only the technical capabilities of such agents, but the impact they have on human social dynamics at the individual and population level. We approach this problem by modelling the potential for artificial agents to "nudge" attitudes to fairness and cooperation in populations of human agents, who update their behavior according to a process of social learning. We show that the presence of artificial agents in a population playing the ultimatum game generates highly divergent, multi-stable outcomes in the learning dynamics of human agents' behaviour. These outcomes correspond to universal fairness (successful nudging), universal selfishness (failed nudging), and a strategy of fairness towards artificial agents and selfishness towards other human agents (unintended consequences of nudging). We then consider the consequences of human agents shifting their behavior when they are aware that they are interacting with an artificial agent. We show that under a wide range of circumstances artificial agents can achieve optimal outcomes in their interactions with human agents while avoiding deception. However, we also find that, in the donation game, deception tends to make nudging easier to achieve.
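    A minimal sketch of the kind of model described: ultimatum-game strategies as (offer, acceptance-threshold) pairs, artificial agents fixed at a fair strategy, and human agents who imitate higher-earning human peers via a Fermi social-learning rule. Population sizes, the fair strategy, and all parameters here are hypothetical, chosen only to make the sketch runnable.

```python
import math
import random

random.seed(2)

N_HUMANS, N_BOTS = 40, 10
FAIR = (0.5, 0.5)   # artificial agents always offer half and reject less

def play(proposer, responder):
    """One ultimatum round over a unit pie. Strategies are (offer, threshold);
    the deal happens only if the offer meets the responder's threshold."""
    offer = proposer[0]
    if offer >= responder[1]:
        return 1.0 - offer, offer
    return 0.0, 0.0

def run(rounds=500, K=0.1):
    humans = [(random.random(), random.random()) for _ in range(N_HUMANS)]
    mixed = lambda: humans + [FAIR] * N_BOTS   # humans cannot tell bots apart
    score = [0.0] * N_HUMANS

    for _ in range(rounds):
        # each human plays both roles against a random member of the mixed population
        for i in range(N_HUMANS):
            other = random.choice(mixed())
            as_prop, _ = play(humans[i], other)
            _, as_resp = play(other, humans[i])
            score[i] = as_prop + as_resp
        # social learning: a random human may copy a higher-earning human peer
        i, j = random.sample(range(N_HUMANS), 2)
        x = max(-50.0, min(50.0, (score[j] - score[i]) / K))
        if random.random() < 1.0 / (1.0 + math.exp(-x)):
            humans[i] = humans[j]
    return humans

humans = run()
```

    Making bots distinguishable, i.e. conditioning `play` on whether the opponent is artificial, is the variation that separates the paper's deceptive and non-deceptive nudging regimes.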